210 research outputs found

    Treatment planning comparison for head and neck cancer between photon, proton, and combined proton-photon therapy - from a fixed beam line to an arc.

    BACKGROUND AND PURPOSE: This study investigates whether combined proton-photon therapy (CPPT) improves treatment plan quality compared to single-modality intensity-modulated radiation therapy (IMRT) or intensity-modulated proton therapy (IMPT) for head and neck cancer (HNC) patients. Different proton beam arrangements for CPPT and IMPT are compared, which could be of specific interest for potential future upright-positioned treatments. Furthermore, it is evaluated whether the benefits of CPPT persist under inter-fractional anatomical changes in HNC treatments.
    MATERIAL AND METHODS: Five HNC patients with a planning CT and multiple (4-7) repeated CTs were studied. CPPT plans with simultaneously optimized photon and proton fluence, single-modality IMPT plans, and IMRT plans were optimized on the planning CT and then recalculated and reoptimized on each repeated CT. For CPPT and IMPT, plans with different degrees of freedom for the proton beams were optimized: fixed horizontal proton beam line (FHB), gantry-like, and arc-like plans were compared.
    RESULTS: Without adaptation, target coverage for CPPT is insufficient (average V95% = 88.4%), while adapted plans recover the initial treatment plan quality for the target (average V95% = 95.5%) and organs-at-risk. Increasing proton beam flexibility in CPPT improves plan quality and reduces the normal tissue complication probability (NTCP) of xerostomia and dysphagia. On average, xerostomia NTCP reductions compared to IMRT are -2.7%/-3.4%/-5.0% for CPPT FHB/CPPT Gantry/CPPT Arc; the corresponding differences for IMPT FHB/IMPT Gantry/IMPT Arc are +0.8%/-0.9%/-4.3%.
    CONCLUSION: CPPT for HNC requires adaptive treatments. Increasing proton beam flexibility in CPPT, either by using a gantry or an upright-positioned patient, improves treatment plan quality. However, the photon component is then substantially reduced; the balance between improved plan quality and cost therefore remains to be determined.

    An approach for estimating dosimetric uncertainties in deformable dose accumulation in pencil beam scanning proton therapy for lung cancer

    Deformable image registration (DIR) is an important component of dose accumulation and the associated clinical outcome evaluation in radiotherapy. However, the resulting deformation vector field (DVF) is subject to unavoidable discrepancies when different algorithms are applied, leading to dosimetric uncertainties in the accumulated dose. We propose here an approach for proton therapy to estimate the dosimetric uncertainties that result from modeled or estimated DVF uncertainties. A patient-specific DVF uncertainty model was built on the first treatment fraction by correlating, at each voxel, the magnitude differences among five DIR results with the magnitude of a single reference DIR. In the following fractions, only the reference DIR needs to be applied, and the DVF geometric uncertainties are estimated by this model. The associated dosimetric uncertainties are then derived from the estimated geometric DVF uncertainty, the dose gradient of the recalculated fractional dose distribution, and a direction factor from the applied reference DIR of that fraction. The estimated dose uncertainty was compared to the reference dose uncertainty obtained by applying each of the different DIRs individually for dose warping. The approach was validated on seven NSCLC patients, each with nine repeated CTs. In a conservative voxel-to-voxel comparison, the proposed model-based method reproduces the 'reference' dosimetric uncertainty to within +/- 5% of the prescribed dose for 77% of the voxels in the body and 66%-98% of the voxels in the investigated structures. We thus propose a method to estimate DIR-induced uncertainties in dose accumulation for proton therapy of lung tumor treatments.
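    The two-step scheme described above can be sketched in numpy as follows. This is a minimal illustration only: the binning strategy, the max-min spread as the disagreement measure, and all function names are assumptions, not the paper's implementation, and the direction factor is omitted for brevity.

```python
import numpy as np

def build_dvf_uncertainty_model(dvfs, ref_dvf, n_bins=20):
    """First fraction: bin voxels by the reference-DVF magnitude and
    record the spread of the candidate DIR solutions in each bin."""
    ref_mag = np.linalg.norm(ref_dvf, axis=-1).ravel()
    mags = np.stack([np.linalg.norm(d, axis=-1).ravel() for d in dvfs])
    spread = mags.max(axis=0) - mags.min(axis=0)  # per-voxel disagreement
    edges = np.linspace(ref_mag.min(), ref_mag.max(), n_bins + 1)
    idx = np.clip(np.digitize(ref_mag, edges) - 1, 0, n_bins - 1)
    model = np.array([spread[idx == b].mean() if np.any(idx == b) else 0.0
                      for b in range(n_bins)])
    return edges, model

def estimate_dose_uncertainty(ref_dvf, dose, edges, model, voxel_size=1.0):
    """Later fractions: estimated dose uncertainty = modelled geometric
    uncertainty (looked up from the reference-DVF magnitude) times the
    local dose gradient magnitude of the recalculated fractional dose."""
    ref_mag = np.linalg.norm(ref_dvf, axis=-1)
    idx = np.clip(np.digitize(ref_mag, edges) - 1, 0, len(model) - 1)
    geo_unc = model[idx]
    grad = np.linalg.norm(np.stack(np.gradient(dose, voxel_size)), axis=0)
    return geo_unc * grad
```

    The practical appeal of such a model is that only the single reference registration is needed per subsequent fraction.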

    Comment on ``Large-space shell-model calculations for light nuclei''

    In a recent publication, Zheng, Vary, and Barrett reproduced the negative quadrupole moment of Li-6 and the low-lying positive-parity states of He-5 using a no-core shell model. In this Comment we question the meaning of these results by pointing out that the model used is inadequate for the reproduction of these properties.
    Comment: LaTeX with RevTeX, 1 PostScript figure in separate file

    Neural parameters estimation for brain tumor growth modeling

    Understanding the dynamics of brain tumor progression is essential for optimal treatment planning. Cast in a mathematical formulation, it is typically viewed as the evaluation of a system of partial differential equations that describe the physiological processes governing the growth of the tumor. To personalize the model, i.e., find a set of parameters relevant to the tumor dynamics of a particular patient, the model is informed by empirical data, e.g., medical images obtained from diagnostic modalities such as magnetic resonance imaging. Existing model-observation coupling schemes require a large number of forward integrations of the biophysical model and rely on simplifying assumptions about the functional form linking the output of the model with the image information. In this work, we propose a learning-based technique for estimating tumor growth model parameters from medical scans. The technique allows explicit evaluation of the posterior distribution of the parameters by sequentially training a mixture-density network, relaxing the constraint on the functional form and reducing the number of samples that must be propagated through the forward model for the estimation. We test the method on synthetic and real scans of rats injected with brain tumors to calibrate the model and to predict tumor progression.
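    The posterior-evaluation step can be illustrated with the standard mixture-density-network output: a Gaussian mixture over the model parameters. This is a generic sketch under assumed conventions (diagonal covariances, and a function name invented for illustration); the network architecture and training procedure of the paper are not reproduced here.

```python
import numpy as np

def mdn_posterior(theta, weights, means, sigmas):
    """Evaluate a Gaussian-mixture posterior p(theta | scan features)
    as parameterised by an MDN's output head.
    theta: (d,) parameter vector to evaluate;
    weights: (K,) mixture weights summing to 1;
    means: (K, d) component means; sigmas: (K, d) diagonal std devs."""
    d = theta.shape[0]
    z = (theta - means) / sigmas                      # (K, d) standardised
    log_comp = (-0.5 * np.sum(z ** 2, axis=1)        # per-component log pdf
                - np.sum(np.log(sigmas), axis=1)
                - 0.5 * d * np.log(2 * np.pi))
    return float(np.sum(weights * np.exp(log_comp)))
```

    Because the mixture is available in closed form, the posterior can be evaluated or sampled without further calls to the forward PDE model, which is the sample-efficiency argument made in the abstract.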

    Time-delay autosynchronization of the spatio-temporal dynamics in resonant tunneling diodes

    The double-barrier resonant tunneling diode exhibits complex spatio-temporal patterns, including low-dimensional chaos, when operated in an active external circuit. We demonstrate how autosynchronization by time-delayed feedback control can be used to select and stabilize specific current density patterns in a noninvasive way. We compare the efficiency of different control schemes involving feedback in either local spatial or global degrees of freedom. The numerically obtained Floquet exponents are explained by analytical results from linear stability analysis.
    Comment: 10 pages, 16 figures
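    The control term in time-delayed feedback (Pyragas) control is F(t) = K [x(t - tau) - x(t)], which vanishes identically on any orbit of period tau; this is what makes the scheme noninvasive. A generic Euler-integration sketch follows; the dynamics f, the parameter values, and the function name are illustrative assumptions, not the diode model of the paper.

```python
import numpy as np

def integrate_with_tdfc(f, x0, tau, K, dt=1e-3, steps=20000):
    """Euler integration of dx/dt = f(x) + K*(x(t - tau) - x(t)).
    The delayed state is taken from a history buffer; before t = tau
    the initial condition is used as the (constant) history."""
    delay = int(round(tau / dt))
    hist = [np.asarray(x0, dtype=float)] * (delay + 1)
    for _ in range(steps):
        x = hist[-1]
        force = K * (hist[-1 - delay] - x)  # Pyragas feedback term
        hist.append(x + dt * (f(x) + force))
    return np.array(hist)
```

    Feedback in "local spatial" versus "global" degrees of freedom, as compared in the abstract, corresponds to applying this term voxel-wise along the diode's lateral coordinate versus to a single global circuit variable.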

    Optimal margin and edge-enhanced intensity maps in the presence of motion and uncertainty

    In radiation therapy, intensity maps involving margins have long been used to counteract the effects of dose blurring arising from motion. More recently, intensity maps with increased intensity near the edge of the tumour (edge enhancements) have been studied to evaluate their ability to offset similar effects that compromise tumour coverage. In this paper, we present a mathematical methodology to derive margin and edge-enhanced intensity maps that aim to provide tumour coverage while delivering minimum total dose. We show that if the tumour is at most about twice as large as the standard deviation of the blurring distribution, the optimal intensity map is a pure scaling increase of the static intensity map, without any margins or edge enhancements. Otherwise, if the tumour size is roughly twice (or more) the standard deviation of motion, then margins and edge enhancements are preferred, and we present formulae to calculate the exact dimensions of these intensity maps. Furthermore, we extend our analysis to scenarios where the parameters of the motion distribution are not known with certainty but can take any value in some range. In these cases, we derive a similar threshold to determine the structure of an optimal margin intensity map.
    Funding: National Cancer Institute (U.S.) (grants R01-CA103904 and R01-CA118200); Natural Sciences and Engineering Research Council of Canada (NSERC); Siemens Aktiengesellschaft; Massachusetts Institute of Technology, Hugh Hampton Young Memorial Fund fellowship.
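    The underlying dose-blurring model and the threshold rule can be sketched in one dimension as follows. The Gaussian motion kernel, the function names, and the use of exactly 2 as the threshold factor are illustrative assumptions; the paper gives the precise formulae.

```python
import numpy as np

def blurred_dose(intensity, sigma, dx=1.0):
    """1-D delivered dose: the static intensity profile convolved with
    a Gaussian motion kernel (the dose-blurring model)."""
    x = np.arange(-4 * sigma, 4 * sigma + dx, dx)  # kernel support ~ 4 sigma
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()                          # conserve total dose
    return np.convolve(intensity, kernel, mode="same")

def intensity_map_strategy(tumour_width, sigma_motion, factor=2.0):
    """Threshold rule from the abstract: below roughly twice the motion
    standard deviation, a scaled static map is optimal; above it,
    margins and edge enhancements are preferred."""
    if tumour_width <= factor * sigma_motion:
        return "scaled static"
    return "margin/edge-enhanced"
```

    For a small tumour, blurring spreads dose everywhere across the target, so uniformly scaling the static map restores coverage; for a large tumour, only the region near the edge is underdosed, which is why margins or edge enhancements become worthwhile.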

    Metacognition as Evidence for Evidentialism

    Metacognition is the monitoring and controlling of cognitive processes. I examine the role of metacognition in 'ordinary retrieval cases', cases in which it is intuitive that via recollection the subject has a justified belief. Drawing on psychological research on metacognition, I argue that evidentialism has a unique, accurate prediction in each ordinary retrieval case: the subject has evidence for the proposition she justifiedly believes. But, I argue, process reliabilism has no unique, accurate predictions in these cases. I conclude that ordinary retrieval cases better support evidentialism than process reliabilism. This conclusion challenges several common assumptions. One is that non-evidentialism alone allows for a naturalized epistemology, i.e., an epistemology that is fully in accordance with scientific research and methodology. Another is that process reliabilism fares much better than evidentialism in the epistemology of memory.

    The Epistemic Status of Processing Fluency as Source for Judgments of Truth

    This article combines findings from cognitive psychology on the role of processing fluency in truth judgments with epistemological theory on the justification of belief. We first review evidence that repeated exposure to a statement increases the subjective ease with which that statement is processed. This increased processing fluency, in turn, increases the probability that the statement is judged to be true. The basic question discussed here is whether the use of processing fluency as a cue to truth is epistemically justified. In the present analysis, based on Bayes' Theorem, we adopt the reliable-process account of justification presented by Goldman (1986) and show that fluency is a reliable cue to truth, under the assumption that the majority of statements one has been exposed to are true. In the final section, we broaden the scope of this analysis and discuss how processing fluency, as a potentially universal cue to judged truth, may contribute to cultural differences in commonsense beliefs.
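    The Bayesian step is a direct application of Bayes' Theorem to fluency as a cue: P(true | fluent) = P(fluent | true) P(true) / [P(fluent | true) P(true) + P(fluent | false) P(false)]. A minimal sketch follows; the numerical values in the test of the majority assumption are illustrative, not the article's parameters.

```python
def p_true_given_fluent(prior_true, p_fluent_given_true, p_fluent_given_false):
    """Posterior probability that a statement is true given that it is
    processed fluently, by Bayes' Theorem. Fluency is a reliable cue
    when this posterior exceeds 0.5."""
    num = p_fluent_given_true * prior_true
    den = num + p_fluent_given_false * (1.0 - prior_true)
    return num / den
```

    With a high base rate of true statements the posterior comfortably exceeds one half, which is the sense in which the majority assumption underwrites the reliability of the fluency cue; drop the base rate below one half and, for the same likelihoods, the cue can fail.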

    The home advantage over the first 20 seasons of the English Premier League: Effects of shirt colour, team ability and time trends

    This study explored the relationship between teams' home shirt colour and the magnitude of the home advantage in English professional soccer. Secondary aims were to explore the consistency of the home advantage over time and the relationship between the home advantage and team ability. Archival data from 7720 matches contested over the first 20 seasons of the English Premier League were analysed. The data show that teams wearing red are more successful than teams wearing other colours, and that teams are more successful in home games than in away games (home advantage index = 0.608). The home advantage has also remained consistent over time (1992/1993-2011/2012) and is greater in low-ability teams (teams with lower league positions) than in high-ability teams. After controlling for team ability, it was found that teams opting for red shirts in their home games did not show a greater home advantage than teams opting for other colour shirts. Two possibilities for this finding are offered: (1) shirt colour is not a contributing factor to team success, or (2) changes in psychological functioning associated with viewing or wearing red stay with team members after the shirt colour has been changed. It is recommended that researchers continue to explore the effect of shirt colour on athlete and team behaviour and further explore how team ability can affect the magnitude of the home-field advantage. © 2012 International Society of Sport Psychology.
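    The home advantage index reported above is consistent with a common operationalisation: the share of all league points won by home teams, where 0.5 indicates no advantage. Whether the paper uses exactly this formula is an assumption; the sketch is for illustration.

```python
def home_advantage_index(home_points, away_points):
    """Home advantage as the proportion of all points won at home.
    0.5 = no advantage; values above 0.5 favour the home side."""
    return home_points / (home_points + away_points)
```

    For example, if home teams collected 608 of every 1000 points contested, the index is 0.608, matching the figure quoted in the abstract.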